Using ChatGPT/AI for code questions
Anyone have any success doing this, or is this something ChatGPT is not up to speed on?
I would think in theory it would be of some help, but maybe I'm just giving the wrong prompts.
Why outsource one of our basic services to AI?
Sometimes I need extremely quick responses for stuff that isn't cookie-cutter, e.g. the architect/engineer says 'oh, I don't know, we need to see what the plan checker says.'
You need to work with better professionals.
Can the AI quote its source, like say 'look in chapter 1010,' so you can verify what it's telling you?
I've been sent ChatGPT answers to code questions. It's bad.
It may be an option in the future. In my opinion, it's still not accurate enough to do this. Narrow AI doesn't actually know what it's being asked. It's basically a Chinese Room situation.
The narrow AI literally looks at vast amounts of data and says 'statistically, this word should follow that word'.
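(For anyone wondering what 'statistically, this word should follow that word' actually looks like, here is a deliberately oversimplified toy sketch in Python built on a tiny made-up scrap of code-style text. Real models use neural networks over tokens rather than raw word counts, but the next-word-prediction framing is the same.)

```python
# Toy next-word predictor: count which word follows which in a tiny,
# made-up training text, then "predict" by picking the most frequent
# follower. This only illustrates the statistical idea described above,
# not how ChatGPT or any real model is actually built.
from collections import Counter, defaultdict

training_text = (
    "the means of egress shall be maintained "
    "the exit shall be unobstructed "
    "the door shall not be locked"
).split()

# Tally follower counts for every word.
bigram_counts = defaultdict(Counter)
for prev_word, next_word in zip(training_text, training_text[1:]):
    bigram_counts[prev_word][next_word] += 1

def predict_next(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = bigram_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("shall"))      # "be" (seen twice, vs. "not" once)
print(predict_next("egress"))     # "shall"
print(predict_next("sprinkler"))  # None: never seen, so nothing to say
```

The punchline is that the toy has no idea what 'egress' means; it only knows what tended to come after it. Scale that up and you get fluent answers that still have to be checked against the actual code text.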
If you want to see a real-world example of where this has caused issues, look at UpCodes. They use a narrow AI system and have been known to produce some really inaccurate responses and cross-references.
I generally only use it to get a quick starting point or a reference code section, then I go into the actual code and check/confirm what ChatGPT is telling me. I've noticed it's about 50/50. I would still never take anything AI told me and base the designs off it without vetting it.
This is how it should be used: point you in the right direction so you can vet quicker. I doubt it will ever go past that initial step given local amendments, contradictory requirements in other codes that need to be followed, etc.
That's basically what I'm wanting to use it for while the architect is reaching out to the city. But based on what people say here, it's not even good for that.
So, are you people paying for ChatGPT?
I had a conversation with a client who was all in. She would have it write executive narratives of her voice notes. She also scheduled staff on it, but stated emphatically, “ChatGPT cannot do math!” I was surprised to hear how enthusiastic she was.
In the same conversation, my struct eng piped up and said that he had used it to do RFP searches that saved him a bunch of time.
I'm not. However, I've used it to write first drafts of proposal descriptions. I'm considering subscribing to Midjourney or something similar just to 'play around' with it.
We use it to draft proposals. The quality of the writing has improved over time, but not impressively and not quickly. It's a tool like any other.
Any specific search on Google now is powered by some kind of AI. They call it generative AI.
You might, maybe, kinda-sorta, possibly trust an LLM which is specifically trained on code language to answer code questions for you, but only if you double check them yourself. There are a couple of those out there. You have to pay for them.
A general LLM like ChatGPT? LMAO no.
Yeah, that prob makes sense; it won't work well if it's a generalist LLM. Guess that's why the answers are terrible. Wasn't even making contact with the ball on the questions I had.
Used ChatGPT to help me understand some tricky hazardous-material code, and after giving it multiple prompts it really started breaking things down and making connections that I was able to extrapolate from. Use it as an aid and see how it goes; ironically, it's your own understanding of the code that helps you prompt the questions that get it to not just generate generic BS.
You can use it to generally lead you in the right direction, or at least to find out which chapter or general section of the code your answer is in, but always refer to the actual legal code for actual answers, not ChatGPT or UpCodes or anything of that nature.
No, it is not good at code stuff. You can start there, but that's it. Code is a lot more nuanced than chatbots can pick up on! I've tried many times and sometimes it helps, but usually it just sends me on a tangent that I have to correct. So even just using it to start can waste more time than it saves. As architects we should know where things are in the codes. Don't you have all that memorized? Chapter 10 egress, etc.